AuthorizationException with the specified detail message.
AuthorizationException with the specified cause.
ColumnarStructBase.init(BytesRefArrayWritable cols), while LazyStruct parses fields in a lazy way.
CommandProcessor interface.
HiveStorageHandler.configureTableJobProperties(org.apache.hadoop.hive.ql.plan.TableDesc, java.util.Map).
Object.hashCode().
HiveStorageHandler which supplies the standard defaults for all options.
InputSplit to future operations.
IMetaStoreClient.dropTable(String, String, boolean, boolean).
This method will be removed in release 0.7.0.
org.apache.hadoop.hive.metastore.MetaStoreEventListener for testing purposes.
o is a DoubleWritable with the same value.
ErrorMsg enum that appears to be a match.
subtext from text in the backing buffer, for avoiding string encoding and decoding.
InputFormat for plain files with Deserializer records.
RecordReader for plain files with Deserializer records; reads one row at a time of type R.
FlatFileInputFormat.SerializationContext that reads the Serialization class and specific subclass to be deserialized from the JobConf.
String command and generate an ASTNode tree.
GenericUDAFResolver2 instead.
CONCAT_WS(sep, str1, str2, str3, ...).
ELT(N, str1, str2, str3, ...).
INSTR(str, substr).
LOCATE(substr, str), LOCATE(substr, str, start).
Object.hashCode() to partition.
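The string functions listed above (CONCAT_WS, ELT, INSTR, LOCATE) follow MySQL-style semantics. A minimal plain-Java sketch of that behavior, assuming the usual 1-based positions and NULL-skipping of CONCAT_WS (this is illustrative, not Hive's actual UDF implementation):

```java
public class StringUdfSketch {
    // CONCAT_WS(sep, str1, str2, ...): join non-null strings with a separator.
    static String concatWs(String sep, String... parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            if (p == null) continue;          // CONCAT_WS skips NULL arguments
            if (sb.length() > 0) sb.append(sep);
            sb.append(p);
        }
        return sb.toString();
    }

    // ELT(n, str1, str2, ...): return the n-th string (1-based), or null if out of range.
    static String elt(int n, String... strs) {
        return (n >= 1 && n <= strs.length) ? strs[n - 1] : null;
    }

    // INSTR(str, substr) / LOCATE(substr, str[, start]): 1-based position, 0 if absent.
    static int locate(String substr, String str, int start) {
        int idx = str.indexOf(substr, start - 1);  // convert 1-based start to 0-based
        return idx < 0 ? 0 : idx + 1;              // convert back to 1-based
    }

    public static void main(String[] args) {
        System.out.println(concatWs("-", "a", null, "b")); // a-b
        System.out.println(elt(2, "x", "y", "z"));         // y
        System.out.println(locate("lo", "hello world", 1)); // 4
    }
}
```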
Compressor for the given CompressionCodec from the pool or a new one.
RCFile.Reader.next(LongWritable) first.
Decompressor for the given CompressionCodec from the pool or a new one.
Utilities.getFileExtension(JobConf, boolean, HiveOutputFormat)
Serialization object for objects of type S.
IMetaStoreClient.getTable(String, String).
This method will be removed in release 0.7.0.
HiveMetaHook for a given table.
HiveOutputFormat that writes SequenceFiles with the content saved in the keys, and null in the values.
HiveOutputFormat describes the output-specification for Hive's operators.
Object.hashCode().
HiveOutputFormat that writes SequenceFiles.
HiveStorageHandler; it should only be implemented by handlers which support decomposition of predicates being pushed down into table scans.
HiveIgnoreKeyTextOutputFormat instead.
IndexPredicateAnalyzer.
DefaultGraphWalker using the rules.
IMetaStoreClient.
FSDataInputStream returned.
RCFiles, short for Record Columnar File, are flat files consisting of binary key/value pairs, which share much similarity with SequenceFile.
buffer.
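RCFile's resemblance to SequenceFile lies in this flat layout of binary key/value records. A minimal plain-Java sketch of length-prefixed key/value pairs, the basic record shape both formats build on (illustrative only; this is not the actual RCFile or SequenceFile on-disk format, which adds headers, sync markers, and compression):

```java
import java.io.*;

public class KeyValuePairsSketch {
    // Write length-prefixed binary key/value pairs to a byte stream.
    static byte[] writePairs(String[][] pairs) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        for (String[] kv : pairs) {
            byte[] k = kv[0].getBytes("UTF-8");
            byte[] v = kv[1].getBytes("UTF-8");
            out.writeInt(k.length);  // key length prefix
            out.write(k);
            out.writeInt(v.length);  // value length prefix
            out.write(v);
        }
        return bytes.toByteArray();
    }

    // Read the pairs back by honoring the length prefixes.
    static String[][] readPairs(byte[] data, int count) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        String[][] pairs = new String[count][2];
        for (int i = 0; i < count; i++) {
            byte[] k = new byte[in.readInt()];
            in.readFully(k);
            byte[] v = new byte[in.readInt()];
            in.readFully(v);
            pairs[i][0] = new String(k, "UTF-8");
            pairs[i][1] = new String(v, "UTF-8");
        }
        return pairs;
    }
}
```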
Reads length bytes from this DataInputStream and stores them in byte array buffer starting at offset.
Reads a double value from this stream.
Reads a float value from this stream.
buffer.
buffer starting at the position offset.
Reads a long value from this stream.
Reads a short value from this stream.
Reads a byte value from this stream and returns it as an int.
Reads a short value from this stream and returns it as an int.
in.
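The read methods summarized above mirror java.io.DataInputStream. A self-contained round trip exercising each primitive read (standard-library API only):

```java
import java.io.*;

public class DataStreamRoundTrip {
    public static void main(String[] args) throws IOException {
        // Write primitives with DataOutputStream...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeDouble(3.14);
        out.writeFloat(2.5f);
        out.writeLong(123456789L);
        out.writeShort(7);
        out.writeByte(200);          // stored as a single byte
        out.write(new byte[]{1, 2, 3, 4});

        // ...and read them back in the same order.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(in.readDouble());        // 3.14
        System.out.println(in.readFloat());         // 2.5
        System.out.println(in.readLong());          // 123456789
        System.out.println(in.readShort());         // 7
        System.out.println(in.readUnsignedByte());  // 200 (byte returned as int)
        byte[] buffer = new byte[4];
        in.readFully(buffer, 0, 4);                 // fills buffer starting at offset 0
        System.out.println(buffer[3]);              // 4
    }
}
```

Note that readUnsignedByte widens the stored byte to an int, which is why 200 survives the round trip instead of wrapping to a negative byte value.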
Compressor to the pool.
Decompressor to the pool.
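The pool behavior described in these entries (hand out an idle Compressor/Decompressor or create a new one, then return it after use) is the classic object-pool pattern. A generic plain-Java sketch of that get-or-create and return shape (an assumption for illustration, not Hadoop's actual CodecPool, which is keyed by CompressionCodec class):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Hand out a pooled instance if one is idle, otherwise create a new one.
    public synchronized T borrow() {
        T obj = idle.pollFirst();
        return (obj != null) ? obj : factory.get();
    }

    // Return an instance to the pool so later borrowers can reuse it.
    public synchronized void giveBack(T obj) {
        idle.addFirst(obj);
    }

    public synchronized int idleCount() {
        return idle.size();
    }
}
```

Pooling pays off for compressors because each instance typically owns large (sometimes native) buffers that are expensive to reallocate per file split.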
RewriteCanApplyProcFactory to determine if any index can be used and if the input query meets all the criteria for rewrite optimization.
RewriteGBUsingIndex to determine if the rewrite optimization can be applied to the input query.
RewriteQueryUsingAggregateIndex used to rewrite the operator plan with the index table instead of the base table.
count number of bytes in this stream.
HadoopThriftAuthBridge
HadoopThriftAuthBridge
IMetaStoreClient.tableExists(String, String).
This method will be removed in release 0.7.0.
HiveMetaStoreClient.newSynchronizedClient(org.apache.hadoop.hive.metastore.IMetaStoreClient).
StringReader.